Atrial fibrillation (AF) is characterized by disorganised electrical activity in the atria and is known to be sustained by regions of fibrosis (scars) or functional cellular remodeling, both of which may lead to areas of slow conduction. Estimating the effective conductivity of the myocardium and identifying regions of abnormal propagation is therefore crucial for the effective treatment of AF. We hypothesise that the spatial distribution of tissue conductivity can be directly inferred from an array of concurrently acquired contact electrograms (EGMs). We generate a dataset of simulated cardiac action potential (AP) propagation using randomised scar distributions and a phenomenological cardiac model, and calculate contact electrograms at various positions on the field. A deep neural network, based on a modified U-net architecture, is trained to estimate the location of the scar and quantify the conductivity of the tissue, achieving a Jaccard index of 91%. We adapt a wavelet-based surrogate testing analysis to confirm that the inferred conductivity distribution is an accurate representation of the ground truth input to the model. We find that the root mean square error (RMSE) between the ground truth and our predictions is significantly smaller ($p_{val}=0.007$) than the RMSE between the ground truth and surrogate samples.
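As a rough illustration of the two evaluation metrics named above, the following sketch (our own, not the authors' code; the grid size and the conductivity threshold separating scar from healthy tissue are assumptions) scores a predicted conductivity map against the ground truth:

```python
import numpy as np

def jaccard_index(pred_scar: np.ndarray, true_scar: np.ndarray) -> float:
    """Intersection over union of two boolean scar masks."""
    intersection = np.logical_and(pred_scar, true_scar).sum()
    union = np.logical_or(pred_scar, true_scar).sum()
    return float(intersection / union) if union > 0 else 1.0

def rmse(pred_cond: np.ndarray, true_cond: np.ndarray) -> float:
    """Root mean square error between conductivity maps."""
    return float(np.sqrt(np.mean((pred_cond - true_cond) ** 2)))

# Example: conductivity maps on an assumed 128x128 tissue grid, with
# scars defined as regions below an assumed conductivity threshold.
true_cond = np.random.rand(128, 128)
pred_cond = true_cond + 0.05 * np.random.randn(128, 128)
THRESHOLD = 0.3  # hypothetical cut-off separating scar from healthy tissue
print(jaccard_index(pred_cond < THRESHOLD, true_cond < THRESHOLD))
print(rmse(pred_cond, true_cond))
```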
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have focused primarily on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1,997 videos of 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviors, hands, and tools, the building blocks of procedural flow and surgeon skill. We show that our model generalizes across diverse surgery types and environments. Illustrating this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to hand motion efficiency. Our Annotated Videos of Open Surgery (AVOS) dataset and trained model will be made available for further development of surgical AI.
To perform counterfactual reasoning in a structural causal model (SCM), one must know the causal mechanisms, which provide factorizations of conditional distributions and deterministic functions mapping noise to samples. Unfortunately, the causal mechanism is not uniquely identified by data that can be gathered by observing and interacting with the world, so the question remains of how to choose among causal mechanisms. In recent work, Oberst & Sontag (2019) propose the Gumbel-max SCM, which adopts Gumbel-max reparameterizations as the causal mechanism because of an intuitively appealing property called counterfactual stability. In this work, we argue instead for choosing the causal mechanism that is best under a quantitative criterion, such as minimizing variance when estimating counterfactual treatment effects. We propose a parameterized family of causal mechanisms that generalize Gumbel-max. We show that they can be trained to minimize counterfactual effect variance and other losses over a distribution of queries of interest, yielding lower-variance estimates of counterfactual treatment effects than fixed alternatives, and also generalizing to queries unseen at training time.
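To make the Gumbel-max mechanism concrete, here is a minimal sketch (ours, not the paper's implementation): the same Gumbel noise that produced an observed categorical outcome is reused under intervened logits, producing counterfactually coupled samples whose effect variance depends on the chosen mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(logits: np.ndarray, gumbels: np.ndarray) -> int:
    """Categorical sample via the Gumbel-max trick: argmax(logits + g)."""
    return int(np.argmax(logits + gumbels))

factual_logits = np.log(np.array([0.6, 0.3, 0.1]))         # observed regime
counterfactual_logits = np.log(np.array([0.2, 0.7, 0.1]))  # intervened regime

g = rng.gumbel(size=3)  # shared exogenous noise (the causal mechanism)
y_factual = gumbel_max_sample(factual_logits, g)
y_counterfactual = gumbel_max_sample(counterfactual_logits, g)
# Because g is shared, (y_factual, y_counterfactual) is a counterfactually
# coupled pair; averaging over many draws of g estimates counterfactual
# treatment effects, whose variance depends on the chosen mechanism --
# the quantity the paper proposes to minimize over a parameterized family.
```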
Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform recent approaches on practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free-text tag by a human (e.g. "#diffuse"), we can find galaxies matching that tag for most tags. The second task is identifying the anomalies most interesting to a particular researcher. Our approach is 100% accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting the model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels: either one (for similarity search) or a few hundred (for anomaly detection or fine-tuning). This challenges the long-standing view that deep supervised methods require new large labelled datasets to be of practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code, Zoobot. Zoobot is accessible to researchers with no prior experience.
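The first task, similarity search, reduces to nearest-neighbour lookup in the learned representation space. A minimal sketch, assuming cosine similarity and a placeholder embedding matrix (the metric, embedding dimension, and sample size are our assumptions, not the paper's):

```python
import numpy as np

def most_similar(query_emb: np.ndarray, all_embs: np.ndarray, k: int = 5):
    """Return indices of the k nearest galaxies by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    e = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)
    sims = e @ q  # cosine similarity of every galaxy to the query
    return np.argsort(-sims)[:k]

# Placeholder representations: 10,000 galaxies, 512-dimensional embeddings.
embeddings = np.random.randn(10_000, 512)
print(most_similar(embeddings[0], embeddings))  # the query ranks itself first
```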
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of a foundation model are inherited by all the models adapted from it downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are capable of, owing to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
We present Galaxy Zoo DECaLS: detailed visual morphological classifications for Dark Energy Camera Legacy Survey (DECaLS) images of galaxies within the SDSS DR8 footprint. The deeper DECaLS images (r = 23.6 versus r = 22.2 from SDSS) reveal spiral arms, weak bars, and tidal features not visible in SDSS imaging. To best exploit the larger DECaLS images, volunteers select from a new set of answers designed to improve sensitivity to mergers and bars. Galaxy Zoo volunteers provided 7.5 million individual classifications of over 314,000 galaxies. 140,000 galaxies received at least 30 classifications, sufficient to accurately measure detailed morphology such as bars, and the remainder received approximately 5. All classifications were used to train an ensemble of Bayesian convolutional neural networks (a state-of-the-art deep learning method) to predict posteriors for the detailed morphology of all 314,000 galaxies. When measured against confident volunteer classifications, the networks are approximately 99% accurate on each question. Morphology is a fundamental feature of every galaxy; our human and machine classifications are an accurate and detailed resource for understanding how galaxies evolve.
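A minimal sketch of the ensemble idea (our illustration, not the paper's pipeline): each network's predictive distribution for one galaxy and one question is averaged to approximate a posterior over the volunteer answers. The numbers are placeholders.

```python
import numpy as np

# Per-network predictive distributions P(answer | image) for one galaxy
# and one decision-tree question (hypothetical three-answer question).
ensemble_probs = np.array([
    [0.80, 0.15, 0.05],  # network 1
    [0.75, 0.20, 0.05],  # network 2
    [0.85, 0.10, 0.05],  # network 3
])
posterior = ensemble_probs.mean(axis=0)  # approximate predictive posterior
print(posterior)  # probabilities over the question's possible answers
```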
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT-3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, with little precedent in the literature, is more challenging still. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT-3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
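One plausible way to make drum MIDI digestible for a text-pre-trained model is to serialize hits as tokens. The encoding below is an illustrative assumption of ours, not the paper's scheme; the token vocabulary and 16th-note timing grid are hypothetical.

```python
# Map General MIDI drum pitches to readable tokens (assumed vocabulary).
DRUMS = {36: "KICK", 38: "SNARE", 42: "HAT"}

def groove_to_text(events):
    """Serialize a drum groove as text. events: list of (step, midi_pitch)."""
    tokens, last_t = [], 0
    for t, pitch in sorted(events):
        if t > last_t:
            tokens.append(f"WAIT_{t - last_t}")  # elapsed 16th-note steps
            last_t = t
        tokens.append(DRUMS.get(pitch, f"PITCH_{pitch}"))
    return " ".join(tokens)

print(groove_to_text([(0, 36), (0, 42), (2, 42), (4, 38), (4, 42)]))
# -> "KICK HAT WAIT_2 HAT WAIT_2 SNARE HAT"
```

Strings of this form can then be fed to a language-model fine-tuning pipeline like any other text corpus.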
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it: (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of accuracy and F-measure, respectively.
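The two overfitting conditions can be expressed schematically as set checks over inferred invariants. This is our simplification for illustration only; the invariant representation and inference machinery in the actual system are far richer.

```python
def is_overfitting(patch_invariants: set[str],
                   correct_invariants: set[str],
                   error_invariants: set[str]) -> bool:
    """Flag a patch as overfitting per the two conditions described above."""
    # (1) the patch violates some invariant of the correct specification
    violates_correct_spec = not correct_invariants <= patch_invariants
    # (2) the patch preserves an invariant of the buggy behavior
    keeps_error_behavior = bool(error_invariants & patch_invariants)
    return violates_correct_spec or keeps_error_behavior

# Hypothetical invariants written as plain strings:
print(is_overfitting({"x >= 0", "y == x + 1"},   # invariants of the patch
                     {"x >= 0"},                 # correct specification
                     {"y == x"}))                # buggy behavior -> False
```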
When robots learn reward functions using high-capacity models that take raw state directly as input, they need both to learn a representation for what matters in the task -- the task "features" -- and to learn how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior differs; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a close parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, because we aim to learn the representations that people use, so that we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
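The contrastive parallel suggests a triplet-style objective: embeddings of behaviors a user labels similar are pulled together, and dissimilar ones pushed apart. A minimal sketch with the feature dimension, network size, and margin all assumed (this is our illustration of the general technique, not the paper's exact training setup):

```python
import torch
import torch.nn as nn

# Small encoder mapping raw behavior features to an embedding space.
encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One batch of similarity queries: (anchor, similar, dissimilar) behaviors.
anchor = torch.randn(16, 10)
similar = anchor + 0.1 * torch.randn(16, 10)   # user says "these are alike"
dissimilar = torch.randn(16, 10)               # user says "these differ"

loss = loss_fn(encoder(anchor), encoder(similar), encoder(dissimilar))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```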